Making Random Choices Invisible to the Scheduler
When dealing with process calculi and automata which express both
nondeterministic and probabilistic behavior, it is customary to introduce the
notion of a scheduler to resolve the nondeterminism. It has been observed that
for certain applications, notably those in security, the scheduler needs to be
restricted so as not to reveal the outcome of the protocol's random choices, or
otherwise the adversary model would be too strong even for "obviously
correct" protocols. We propose a process-algebraic framework in which the
control on the scheduler can be specified in syntactic terms, and we show how
to apply it to solve the problem mentioned above. We also consider the
definition of (probabilistic) may and must preorders, and we show that they are
precongruences with respect to the restricted schedulers. Furthermore, we show
that all the operators of the language, except replication, distribute over
probabilistic summation, which is a useful property for verification.
Constructing elastic distinguishability metrics for location privacy
With the increasing popularity of hand-held devices, location-based
applications and services have access to accurate and real-time location
information, raising serious privacy concerns for their users. The recently
introduced notion of geo-indistinguishability tries to address this problem by
adapting the well-known concept of differential privacy to the area of
location-based systems. Although geo-indistinguishability presents various
appealing aspects, it has the problem of treating space in a uniform way,
imposing the addition of the same amount of noise everywhere on the map. In
this paper we propose a novel elastic distinguishability metric that warps the
geometrical distance, capturing the different degrees of density of each area.
As a consequence, the obtained mechanism adapts the level of noise while
achieving the same degree of privacy everywhere. We also show how such an
elastic metric can easily incorporate the concept of a "geographic fence" that
is commonly employed to protect the highly recurrent locations of a user, such
as his home or work. We perform an extensive evaluation of our technique by
building an elastic metric for Paris' wide metropolitan area, using semantic
information from the OpenStreetMap database. We compare the resulting mechanism
against the Planar Laplace mechanism satisfying standard
geo-indistinguishability, using two real-world datasets from the Gowalla and
Brightkite location-based social networks. The results show that the elastic
mechanism adapts well to the semantics of each area, adjusting the noise as we
move outside the city center, hence offering better overall privacy.
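The Planar Laplace baseline compared against here has a simple sampler: draw a uniform angle and a Gamma-distributed radius, since the planar density proportional to exp(-eps * d) factors that way in polar coordinates. A minimal sketch in Python (the function name and setup are illustrative, not from the paper):

```python
import math
import random

def planar_laplace(eps, x, y):
    """Report a noisy location around (x, y).  The planar Laplace density
    is proportional to exp(-eps * d(z, (x, y))); in polar coordinates this
    gives a uniform angle and a Gamma(shape=2, scale=1/eps) radius."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = random.gammavariate(2.0, 1.0 / eps)
    return (x + r * math.cos(theta), y + r * math.sin(theta))
```

The expected displacement is 2/eps regardless of where the user is, which is exactly the uniform treatment of space that the elastic metric is designed to avoid.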
Differential Privacy versus Quantitative Information Flow
Differential privacy is a notion of privacy that has become very popular in
the database community. Roughly, the idea is that a randomized query mechanism
provides sufficient privacy protection if the ratio between the probabilities
that two different entries originate a certain answer is bounded by e^ε.
In the fields of anonymity and information flow there is a similar concern for
controlling information leakage, i.e. limiting the possibility of inferring the
secret information from the observables. In recent years, researchers have
proposed to quantify the leakage in terms of the information-theoretic notion
of mutual information. There are two main approaches that fall into this
category: one based on Shannon entropy, and one based on Rényi's min-entropy.
The latter has a connection with the so-called Bayes risk, which expresses the
probability of guessing the secret. In this paper, we show how to model the
query system in terms of an information-theoretic channel, and we compare the
notion of differential privacy with that of mutual information. We show that
the notion of differential privacy is strictly stronger, in the sense that it
implies a bound on the mutual information, but not vice versa.
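The channel view of a query mechanism can be made concrete in a few lines: rows are secret inputs, columns are answers, and the e^ε ratio bound is checked cell by cell. A small sketch (the randomized-response matrix below is illustrative, not taken from the paper):

```python
import math

def is_eps_dp(C, adjacent, eps):
    """Check that every pair of adjacent rows of the channel matrix C
    satisfies C[x][y] <= e^eps * C[x'][y] for all outputs y."""
    bound = math.exp(eps)
    return all(C[x][y] <= bound * C[xp][y]
               for x, xp in adjacent for y in range(len(C[x])))

def min_entropy_leakage(C):
    """Min-entropy leakage of channel C under a uniform prior:
    log2 of the sum, over columns, of the column maximum."""
    return math.log2(sum(max(col) for col in zip(*C)))

# Illustrative randomized-response channel on two adjacent databases.
eps = math.log(3)                       # e^eps = 3
C = [[0.75, 0.25],
     [0.25, 0.75]]
adjacent = [(0, 1), (1, 0)]
print(is_eps_dp(C, adjacent, eps))          # True: all ratios are exactly 3
print(round(min_entropy_leakage(C), 3))     # 0.585, i.e. log2(1.5) bits
```

The leakage of this ε-differentially-private channel stays below log2 of e^ε, in line with the bound the paper establishes.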
A differentially private mechanism of optimal utility for a region of priors
The notion of differential privacy has emerged in the area of statistical databases as a measure of protection of the participants' sensitive information, which can be compromised by selected queries. Differential privacy is usually achieved by using mechanisms that add random noise to the query answer. Thus, privacy is obtained at the cost of reducing the accuracy, and therefore the "utility", of the answer. Since the utility depends on the user's side information, commonly modelled as a prior distribution, a natural goal is to design mechanisms that are optimal for every prior. However, it has been shown that such mechanisms do not exist for any query other than counting queries. Given the above negative result, in this paper we consider the problem of identifying a restricted class of priors for which an optimal mechanism does exist. Given an arbitrary query and a privacy parameter, we geometrically characterise a special region of priors as a convex polytope in the priors space. We then derive upper bounds for utility as well as for min-entropy leakage for the priors in this region. Finally we define what we call the "tight-constraints mechanism" and we discuss the conditions for its existence. This mechanism has the property of reaching the bounds for all the priors of the region, and thus it is optimal on the whole region.
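For the positive case mentioned in the abstract, counting queries, the mechanism known to be universally optimal is the geometric mechanism, which adds two-sided geometric noise. A hedged Python sketch (function names are mine, not the paper's):

```python
import math
import random

def geometric_mechanism(true_count, eps):
    """Add two-sided geometric noise, P(Z = z) proportional to
    exp(-eps * |z|).  The difference of two i.i.d. geometric variables
    with success probability 1 - exp(-eps) has exactly this law."""
    alpha = math.exp(-eps)
    def geom():
        # inverse-transform sample of P(X = k) = (1 - alpha) * alpha**k
        return int(math.log(1.0 - random.random()) / math.log(alpha))
    return true_count + geom() - geom()

def noise_pmf(z, eps):
    """Exact pmf of the noise, handy for checking the e^eps ratio bound."""
    alpha = math.exp(-eps)
    return (1 - alpha) / (1 + alpha) * alpha ** abs(z)
```

Since adjacent databases change a count by at most 1, the pmf ratio between the two resulting output distributions is at most e^eps, which is the differential privacy guarantee.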
Geo-indistinguishability: A Principled Approach to Location Privacy
In this paper we report on our ongoing project aimed at protecting the privacy of the user when dealing with location-based services. The starting point of our approach is the principle of geo-indistinguishability, a formal notion of privacy that protects the user's exact location, while allowing approximate information, typically needed to obtain a certain desired service, to be released. We then present two mechanisms for achieving geo-indistinguishability: one generic, to sanitize locations in any setting with reasonable utility; the other custom-built for a limited set of locations but providing optimal utility. Finally we extend our mechanisms to the case of location traces, where the user releases his location repeatedly throughout the day, and we provide a method to limit the degradation of the privacy guarantees due to the correlation between the points. All the mechanisms were tested on real datasets and compared both among themselves and with respect to the state of the art in the field.
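The degradation over traces comes from sequential composition: sanitizing each of n points independently with an eps-geo-indistinguishable mechanism consumes n times eps of budget. A small illustrative sketch, assuming standard planar Laplace noise per point (names are mine, not the paper's):

```python
import math
import random

def sanitize_trace(trace, eps_total):
    """Sanitize a list of (x, y) points independently while staying
    within a total budget eps_total: by sequential composition, each
    point can only be given eps_total / len(trace)."""
    eps_point = eps_total / len(trace)
    noisy = []
    for (x, y) in trace:
        theta = random.uniform(0.0, 2.0 * math.pi)
        r = random.gammavariate(2.0, 1.0 / eps_point)  # planar Laplace radius
        noisy.append((x + r * math.cos(theta), y + r * math.sin(theta)))
    return noisy
```

The expected noise per point, 2 * len(trace) / eps_total, grows linearly with the trace length; this linear degradation is precisely what the paper's trace mechanism is designed to limit.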
Compositional Methods for Information-Hiding
Protocols for information-hiding often use randomized primitives to obfuscate the link between the observables and the information to be protected. The degree of protection provided by a protocol can be expressed in terms of the probability of error associated with the inference of the secret information. We consider a probabilistic process calculus approach to the specification of such protocols, and we study how the operators affect the probability of error. In particular, we characterize constructs that have the property of not decreasing the degree of protection, and that can therefore be considered safe in the modular construction of protocols. As a case study, we apply these techniques to the Dining Cryptographers, and we are able to derive a generalization of Chaum's strong anonymity result.
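The probability of error used as the protection measure here is the Bayes risk of a maximum-a-posteriori adversary, computable directly from the prior and the channel matrix. A small sketch (the 3-user channel below is made up for illustration, it is not Chaum's protocol):

```python
def bayes_risk(prior, C):
    """Probability of error of a MAP adversary observing channel C:
    one minus the adversary's expected success probability, summed
    over all observables y of max_x prior[x] * C[x][y]."""
    p_success = sum(max(prior[x] * C[x][y] for x in range(len(prior)))
                    for y in range(len(C[0])))
    return 1.0 - p_success

# Illustrative channel that leaks a little about which of 3 users acted.
prior = [1/3, 1/3, 1/3]
C = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
print(round(bayes_risk(prior, C), 3))   # 0.5
```

For a perfectly anonymous channel (all rows equal) the risk would be 2/3, the best blind guess; the leaky channel above lowers it to 0.5, quantifying the lost protection.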
Up-To Techniques for Generalized Bisimulation Metrics
Bisimulation metrics allow us to compute distances between the behaviors of probabilistic systems. In this paper we present enhancements of the proof method based on bisimulation metrics, by extending the theory of up-to techniques to (pre)metrics on discrete probabilistic concurrent processes.
Up-to techniques have proved to be a powerful proof method for showing that two systems are bisimilar, since they make it possible to build (and thereby check) smaller relations in bisimulation proofs. We define soundness conditions for up-to techniques on metrics, and study compatibility properties that allow us to safely compose up-to techniques with each other. As an example, we derive the soundness of the up-to-bisimilarity-metric-and-context technique.
The study is carried out for a generalized version of the bisimulation metrics, in which the Kantorovich lifting is parametrized with respect to a distance function. The standard bisimulation metrics, as well as metrics aimed at capturing multiplicative properties such as differential privacy, are specific instances of this general definition.
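Computing the Kantorovich lifting in general requires solving a transport problem, but for distributions over ordered points 0..n-1 with ground distance |i - j| it reduces to the L1 distance between the cumulative distributions. A sketch of just this simple special case (not the paper's generalized, parametrized lifting):

```python
def kantorovich_1d(mu, nu):
    """Kantorovich (Wasserstein-1) distance between two distributions
    on points 0..n-1 with ground distance |i - j|.  In one dimension
    it equals the L1 distance between the two CDFs."""
    dist, cum = 0.0, 0.0
    for p, q in zip(mu, nu):
        cum += p - q          # running difference of the CDFs
        dist += abs(cum)
    return dist

# Moving half the mass one step and half the mass two steps... no:
# optimally, 0.5 mass moves from point 0 to point 2, at cost 0.5 * 2.
print(kantorovich_1d([0.5, 0.5, 0.0], [0.0, 0.5, 0.5]))   # 1.0
```

The general lifting replaces |i - j| with an arbitrary distance between states, which is what lets the same construction capture multiplicative, differential-privacy-style distances.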
Anonymity Protocols as Noisy Channels
We consider a framework in which anonymity protocols are interpreted as noisy channels in the information-theoretic sense, and we explore the idea of using the notion of capacity as a measure of the loss of anonymity. Such an idea was already suggested by Moskowitz, Newman and Syverson, in their analysis of the covert channel that can be created as a result of non-perfect anonymity. We consider the case in which some leak of information is intended by design, and we introduce the notion of conditional capacity to rule out this factor, thus retrieving a natural correspondence with the notion of anonymity. Furthermore, we show how to compute the capacity and the conditional capacity when the anonymity protocol satisfies certain symmetries. We also investigate how the adversary can test the system to try to infer the user's identity, and we study how his probability of success depends on the characteristics of the channel. We then illustrate how various notions of anonymity can be expressed in this framework, and show the relation with some definitions of probabilistic anonymity in the literature. Finally, we show how to compute the matrix of the channel (and hence the capacity and conditional capacity) using model checking.
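For a concrete channel matrix, the capacity used here as the anonymity-loss measure can be computed with the standard Blahut-Arimoto iteration; a minimal sketch in plain Python, with no symmetry assumptions:

```python
import math

def capacity(C, iters=500):
    """Capacity in bits of a discrete memoryless channel with
    row-stochastic matrix C, via the Blahut-Arimoto iteration."""
    n, m = len(C), len(C[0])
    p = [1.0 / n] * n                 # start from the uniform input prior
    Z = 1.0
    for _ in range(iters):
        # output distribution induced by the current prior p
        q = [sum(p[x] * C[x][y] for x in range(n)) for y in range(m)]
        # exp of the KL divergence of each row from q
        D = [math.exp(sum(C[x][y] * math.log(C[x][y] / q[y])
                          for y in range(m) if C[x][y] > 0))
             for x in range(n)]
        Z = sum(p[x] * D[x] for x in range(n))
        p = [p[x] * D[x] / Z for x in range(n)]   # reweight the prior
    return math.log2(Z)               # at convergence, Z = 2**capacity

BSC = [[0.75, 0.25],
       [0.25, 0.75]]
print(round(capacity(BSC), 4))        # 0.1887, i.e. 1 - H(0.25)
```

For symmetric channels like the one above, the uniform prior is already optimal and the iteration converges immediately, matching the closed forms the paper derives under symmetry.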